
    Protein-protein interactions in human pluripotent stem cell-derived neural stem cells and their neuronal progeny

    While most approaches in cell-based disease modeling focus on the effects of defined mutations on the molecular or cellular phenotype, the assessment of underlying alterations in the interactomes of disease-relevant proteins has faced several technical challenges. First, experiments were typically conducted using overexpression paradigms, resulting in non-physiologically high protein levels and thus promoting nonspecific interactions. Second, such studies have relied mostly on transformed cell lines, which enable mass production of transgenic cells but do not exhibit a tissue-specific proteomic environment. The present study therefore aimed to address these issues by bacterial artificial chromosome (BAC)-based expression of tagged proteins in pluripotent stem cell-derived long-term neuroepithelial-like stem cells (lt-NES cells), a stable and robust cell population that generates authentic human neurons with high fidelity. Tagged proteins were found to be expressed at endogenous levels, and fluorescence in situ hybridisation (FISH) analyses revealed an average integration rate of one copy per genome for the majority of cell lines analyzed. Correct compartmentalization of the tagged proteins was confirmed by high-resolution confocal and live-cell imaging, and correct size by Western immunoblotting. Employing this approach, multiple cell lines were generated harboring tagged proteins associated with human developmental disorders, cancer and neurodegeneration. Representatives of these groups include Proliferating Cell Nuclear Antigen (PCNA), Aurora Kinase A (AURKA), Cyclin-Dependent Kinase 2-Associated Protein 1 (CDK2AP1), Set Domain-Containing Protein 1B (SETD1B), RuvB-Like 2 (RUVBL2), the Methyl CpG Binding Protein 2 (MECP2) and the Alzheimer’s disease-associated proteins Nicastrin (NCSTN) and Valosin-Containing Protein (VCP). Using a label-free, quantitative affinity purification-mass spectrometry approach, numerous novel interaction partner candidates of these proteins were identified. Direct comparison of protein-specific interactomes of proliferating lt-NES cells and their neuronal progeny further revealed changes in the composition of several chromatin-remodeling complexes, suggesting that the system is sufficiently sensitive and specific to detect the dynamic, differential recruitment of individual proteins in response to developmental switches. In a proof-of-concept study, BAC-mediated expression of tagged proteins followed by analysis of interacting proteins was successfully transferred to induced pluripotent stem cell (iPS)-derived lt-NES cells, enabling protein-protein interaction (PPI) analyses in the context of complex diseases in subsequent studies. Finally, an adeno-associated virus-based approach for epitope tagging of endogenous genes in iPS-derived lt-NES cells from a patient suffering from Machado-Joseph disease allowed the generation of cell pools expressing both the disease-associated and the healthy isoform of N-terminally FLAG-tagged Ataxin-3. The present work demonstrates the successful establishment of two different methods for protein tagging in somatic cell populations, which can subsequently be employed for a multitude of analytical techniques, including fluorescence-microscopic visualization of protein localization, analysis of the dynamics of protein recruitment, and the detection of PPIs.
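
    A concrete sense of how such label-free AP-MS data are commonly scored may help: the sketch below is a generic illustration, not the study's actual pipeline, of assessing enrichment of a bait pulldown over untagged controls via log2 fold change and a per-protein t-test. Protein names, intensity values and thresholds are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical log2-transformed label-free intensities (rows: detected proteins,
# columns: replicates) for a tagged bait line versus an untagged control line.
proteins = ["PROT_A", "PROT_B", "PROT_C"]
bait = np.array([
    [9.1, 9.4, 9.0],   # strongly enriched with the bait
    [5.2, 5.0, 5.3],   # background binder
    [7.8, 8.1, 7.9],   # background binder
])
control = np.array([
    [6.0, 6.2, 5.9],
    [5.1, 5.2, 5.0],
    [7.7, 7.9, 8.0],
])

# Enrichment over control (difference of means on the log2 scale) and a
# per-protein two-sample t-test across replicates.
log2_fc = bait.mean(axis=1) - control.mean(axis=1)
_, p_values = stats.ttest_ind(bait, control, axis=1)

for name, fc, p in zip(proteins, log2_fc, p_values):
    label = "candidate interactor" if fc > 1.0 and p < 0.05 else "background"
    print(f"{name}: log2FC={fc:+.2f}, p={p:.3f} -> {label}")
```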

    The CIDOC CRM, an Ontological Approach to Schema Heterogeneity

    The CIDOC Conceptual Reference Model (CRM), now ISO/CD 21127, is a core ontology that aims at enabling information exchange and integration between heterogeneous sources of cultural heritage information, archives and libraries. It provides the semantic definitions and clarifications needed to transform disparate, heterogeneous information sources into a coherent global resource, be it within a larger institution, in intranets or on the Internet. It is argued that such an ontology is property-centric, compact and highly generic, in contrast to terminological systems. The presentation will demonstrate how such a well-crafted core ontology can help to achieve very high precision of schema integration at reasonable cost in a huge, diverse domain. It is further argued that such ontologies are widely reusable and adaptable to other domains, which makes their development cost-effective.
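
    To make the property-centric, event-centric modelling style tangible, the following sketch builds a tiny CRM-style graph with rdflib. The example.org namespace, the sample painting and artist, and the exact class and property identifiers (E22, E12, E21, P108i, P14, which vary between CRM versions) are illustrative assumptions rather than content of the abstract.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

# Commonly used CRM namespace; identifiers below are illustrative and
# version-dependent.
CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
EX = Namespace("http://example.org/museum/")   # hypothetical local namespace

g = Graph()
g.bind("crm", CRM)
g.bind("ex", EX)

painting = EX["object/42"]
production = EX["production/42"]
artist = EX["person/vermeer"]

# Event-centric description: the object is linked to the person who made it
# through an explicit production event, rather than by a flat "creator" field.
g.add((painting, RDF.type, CRM["E22_Man-Made_Object"]))
g.add((painting, RDFS.label, Literal("Girl with a Pearl Earring")))
g.add((painting, CRM["P108i_was_produced_by"], production))
g.add((production, RDF.type, CRM["E12_Production"]))
g.add((production, CRM["P14_carried_out_by"], artist))
g.add((artist, RDF.type, CRM["E21_Person"]))
g.add((artist, RDFS.label, Literal("Johannes Vermeer")))

print(g.serialize(format="turtle"))
```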

    Towards a core ontology for information integration

    In this paper, we argue that a core ontology is one of the key building blocks necessary to enable the scalable assimilation of information from diverse sources. A complete and extensible ontology that expresses the basic concepts common across a variety of domains, and that can provide the basis for specialization into domain-specific concepts and vocabularies, is essential for well-defined mappings between domain-specific knowledge representations (i.e., metadata vocabularies) and for the subsequent building of a variety of services such as cross-domain searching, browsing, data mining and knowledge extraction. This paper describes the results of a series of three workshops held in 2001 and 2002 that brought together representatives from the cultural heritage and digital library communities with the goal of harmonizing their knowledge perspectives and producing a core ontology. The knowledge perspectives of these two communities were represented by the CIDOC/CRM [31], an ontology for information exchange in the cultural heritage and museum community, and the ABC ontology [33], a model for the exchange and integration of digital library information. This paper describes the mediation process between these two different knowledge biases and its result: the harmonization of the ABC and CIDOC/CRM ontologies, which we believe may provide a useful basis for information integration in the wider scope of the involved communities.
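
    As a deliberately minimal illustration of what a mapping from domain-specific metadata vocabularies into shared core terms can look like, the hypothetical Python crosswalk below translates a library record and a museum record into one flat core vocabulary and runs a single cross-domain query over both. An actual harmonized ABC/CRM mapping would target ontology classes and properties (events, agents, time-spans) rather than flat keys.

```python
# Two records in their native, domain-specific vocabularies (illustrative data).
library_record = {"dc:title": "Moby-Dick", "dc:creator": "Herman Melville", "dc:date": "1851"}
museum_record = {"object_name": "Scrimshaw whale tooth", "maker": "Unknown carver", "made_year": "1850"}

# Hypothetical crosswalks from each local vocabulary to shared core terms.
CROSSWALKS = {
    "library": {"dc:title": "title", "dc:creator": "agent", "dc:date": "date"},
    "museum": {"object_name": "title", "maker": "agent", "made_year": "date"},
}

def to_core(record, domain):
    """Translate a domain-specific record into the shared core vocabulary."""
    mapping = CROSSWALKS[domain]
    return {mapping[key]: value for key, value in record.items() if key in mapping}

# Once both records speak the core vocabulary, a single cross-domain query works.
core_records = [to_core(library_record, "library"), to_core(museum_record, "museum")]
hits = [r for r in core_records if r.get("date", "").startswith("185")]
print(hits)
```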

    From Understanding Genetic Drift to a Smart-Restart Parameter-less Compact Genetic Algorithm

    One of the key difficulties in using estimation-of-distribution algorithms is choosing the population size(s) appropriately: too small values lead to genetic drift, which can cause enormous difficulties. In the regime with no genetic drift, however, the runtime is often roughly proportional to the population size, which renders large population sizes inefficient. Based on a recent quantitative analysis of which population sizes lead to genetic drift, we propose a parameter-less version of the compact genetic algorithm that automatically finds a suitable population size without spending too much time in situations rendered unfavorable by genetic drift. We prove a mathematical runtime guarantee for this algorithm and conduct an extensive experimental analysis on four classic benchmark problems, both without and with additive centered Gaussian posterior noise. The former shows that, under a natural assumption, our algorithm has a performance very similar to the one obtainable from the best problem-specific population size. The latter confirms that missing the right population size in the original cGA can be detrimental and that previous theory-based suggestions for the population size can be far away from the right values; it also shows that both our algorithm and a previously proposed parameter-less variant of the cGA based on parallel runs avoid such pitfalls. Comparing the two parameter-less approaches, ours profits from its ability to abort runs that are likely to be stuck in a genetic drift situation. Comment: 4 figures. Extended version of a paper appearing at GECCO 2020.
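
    The restart idea can be sketched informally as follows: run a standard compact GA with a doubling hypothetical population size K and abort any run that exhausts its iteration budget. In the sketch below, the budget of roughly 4*K^2 iterations, the frequency borders and the convergence test are simplifying assumptions for illustration; the paper's actual smart-restart criterion and constants differ.

```python
import random

def onemax(x):
    """Classic benchmark fitness: number of ones in the bit string."""
    return sum(x)

def sample(p):
    """Draw a bit string from the cGA's frequency vector p."""
    return [1 if random.random() < pi else 0 for pi in p]

def cga(n, K, max_iters, f=onemax):
    """One run of the compact GA with hypothetical population size K.
    Returns (best_x, best_f, converged); converged is True if every
    frequency reached the upper border within the iteration budget."""
    p = [0.5] * n
    lo, hi = 1.0 / n, 1.0 - 1.0 / n            # standard frequency borders
    best_x, best_f = None, float("-inf")
    for _ in range(max_iters):
        x, y = sample(p), sample(p)
        fx, fy = f(x), f(y)
        if fx < fy:
            x, y, fx = y, x, fy                 # x is now the better sample
        if fx > best_f:
            best_x, best_f = x, fx
        for i in range(n):                      # shift frequencies toward the winner
            if x[i] != y[i]:
                p[i] += 1.0 / K if x[i] == 1 else -1.0 / K
                p[i] = min(hi, max(lo, p[i]))
        if all(pi >= hi for pi in p):           # probabilistic model has converged
            return best_x, best_f, True
    return best_x, best_f, False

def smart_restart_cga(n, f=onemax, budget_factor=4):
    """Parameter-less wrapper: run the cGA with doubling K and abort any run
    that exhausts its budget of ~budget_factor * K^2 iterations, since such
    runs are assumed (for this sketch) to be stuck in genetic drift."""
    K = 2
    while True:
        x, fx, converged = cga(n, K, budget_factor * K * K, f)
        if converged:
            return x, fx
        K *= 2                                   # restart with a larger population size

if __name__ == "__main__":
    random.seed(1)
    best, fitness = smart_restart_cga(50)
    print(f"best OneMax value found: {fitness} / 50")
```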

    DCC Digital Curation Manual: Instalment on Ontologies

    Instalment on the role of ontologies within the digital curation life-cycle. Describes the increasingly important role of ontologies for digital curation, some practical applications, the topic’s place within the OAIS reference model, and advice on developing institution-specific selection frameworks.

    From calculations to reasoning: history, trends, and the potential of Computational Ethnography and Computational Social Anthropology

    The domains of 'computational social anthropology' and 'computational ethnography' refer to the computational processing or computational modelling of data for anthropological or ethnographic research. In this context, the article surveys the use of computational methods regarding the production and the representation of knowledge. The ultimate goal of the study is to highlight the significance of modelling ethnographic data and anthropological knowledge by harnessing the potential of the semantic web. The first objective was to review the use of computational methods in anthropological research over the last 25 years, while the second was to explore the potential of the semantic web, focusing on existing technologies for ontological representation. To these ends, the study examines the use of computers in anthropology for data processing and for data modelling that supports more effective processing. The survey reveals an ongoing transition from the instrumentalisation of computers as tools for calculations to the implementation of information-science methodologies for analysis, deduction, knowledge representation, and reasoning as part of the research process in social anthropology. Finally, it is highlighted that the ecosystem of the semantic web does not subserve quantification and metrics but introduces a new conceptualisation for addressing and meeting research questions in anthropology.